• List of Articles: Neural Networks

      • Open Access Article

        1 - A Hybrid Neural Network Ensemble Model for Credit Risk Assessment
        Shaban Elahi, Ahmad Ghodselahi, Hamidreza Naji
        Banking is an industry that deals with capital and risk in order to make a profit. Credit risk, as the most important of these risks, is an active research domain in financial risk management studies. In this paper, a hybrid model for credit risk assessment is designed that applies ensemble learning to credit-granting decisions. Combining clustering and classification techniques improved the system. The real-world German credit dataset was used to train the neural networks. The proposed model was implemented as a multi-agent credit risk evaluation system, and the results show that it achieves higher accuracy, better performance, and lower cost in applicant classification than other credit risk evaluation methods.
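        As a hedged illustration of the clustering-plus-classification ensemble described in the abstract above, the sketch below clusters applicants with k-means and trains one small neural-network classifier per cluster, using the OpenML "credit-g" copy of the German credit data. The three-cluster split, the single-hidden-layer MLP experts, and the routing scheme are assumptions for illustration, not the paper's multi-agent implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# German credit data: 1000 applicants labelled "good"/"bad" (OpenML id "credit-g").
data = fetch_openml("credit-g", version=1, as_frame=True)
X_raw = data.data
y = (data.target == "bad").astype(int).to_numpy()

cat_cols = X_raw.select_dtypes(include="category").columns
num_cols = X_raw.select_dtypes(exclude="category").columns
pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
    ("num", StandardScaler(), num_cols),
])
X = pre.fit_transform(X_raw)
X = np.asarray(X.todense()) if hasattr(X, "todense") else np.asarray(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: cluster applicants into homogeneous groups.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

# Stage 2: train one neural-network "expert" classifier per cluster (assumed design).
experts = {}
for c in range(kmeans.n_clusters):
    m = kmeans.labels_ == c
    experts[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                               random_state=0).fit(X_tr[m], y_tr[m])

# Route each test applicant to the expert of its nearest cluster.
clusters_te = kmeans.predict(X_te)
pred = np.empty_like(y_te)
for c in range(kmeans.n_clusters):
    m = clusters_te == c
    if m.any():
        pred[m] = experts[c].predict(X_te[m])

print("hold-out accuracy:", (pred == y_te).mean())
```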
      • Open Access Article

        2 - Comparing A Hybridization of Fuzzy Inference System and Particle Swarm Optimization Algorithm with Deep Learning to Predict Stock Prices
        Majid Abdolrazzagh-Nezhad, Mahdi Kherad
        Predicting stock prices has created a great business opportunity for a wide range of investors in the stock markets, but the task is difficult because the many economic factors that affect stock markets are highly dynamic and complex. In this paper, two models are designed and implemented to identify the complex relationship between 10 economic factors and the stock prices of companies operating in the Tehran stock market. First, a Mamdani Fuzzy Inference System (MFIS) is designed whose fuzzy rule set is found by the Particle Swarm Optimization (PSO) algorithm. Then a deep learning model consisting of 26 neurons in 5 hidden layers is designed. The models are applied to predict the stock prices of nine companies listed on the Tehran Stock Exchange. The experimental results show that the deep learning model obtains better results than the MFIS-PSO hybrid, the neural network, and the SVM, although the interpretability of the obtained patterns, more consistent behavior with much lower variance, and higher convergence speed remain significant competitive advantages of the MFIS-PSO model.
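        The sketch below illustrates the kind of small deep regressor the abstract describes (26 neurons spread over 5 hidden layers, fed by 10 economic factors). The per-layer split (8-6-5-4-3), the synthetic placeholder data, and the train/test cut are assumptions; the paper's MFIS-PSO counterpart is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: 500 trading days x 10 economic factors plus a synthetic price.
# Real inputs would be the ten Tehran Stock Exchange factors studied in the paper.
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)

# 8 + 6 + 5 + 4 + 3 = 26 hidden neurons across 5 hidden layers (an assumed split).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 6, 5, 4, 3),
                 activation="relu", max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])                          # train on the first 400 days
print("test R^2:", model.score(X[400:], y[400:]))    # evaluate on the last 100 days
```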
      • Open Access Article

        3 - Multi-level ternary quantization for improving sparsity and computation in embedded deep neural networks
        Hosna Manavi Mofrad, Seyed Ali Ansarmohammadi, Mostafa Salehi
        Deep neural networks (DNNs) have attracted great interest due to their success in various applications. However, their computational complexity and memory footprint are the main obstacles to implementing such models on embedded devices with limited memory and computational resources. Network compression techniques can overcome these challenges; quantization and pruning are the most important among them. One well-known quantization method for DNNs is multi-level binary quantization, which not only exploits simple bit-wise logical operations but also reduces the accuracy gap between binary neural networks and full-precision DNNs. Since multi-level binary quantization cannot represent the zero value, it does not take advantage of sparsity. On the other hand, it has been shown that DNNs are sparse, and pruning their parameters reduces the amount of data stored in memory while also speeding up computation. In this paper, we propose a pruning- and quantization-aware training method for multi-level ternary quantization that takes advantage of both multi-level quantization and data sparsity. In addition to increasing accuracy compared to multi-level binary networks, it gives the network the ability to be sparse. To save memory and computation, we increase the sparsity of the quantized network by pruning until the accuracy loss is negligible. The results show that the potential computation speedup of our model at bit-level and word-level sparsity can reach 15x and 45x, respectively, compared to basic multi-level binary networks.
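        A minimal sketch of the core idea, assuming a simple post-hoc magnitude-based scheme rather than the paper's quantization-aware training: weights are mapped to {-a, 0, +a}, and the threshold that sends small weights to zero is exactly the pruning knob that multi-level binary quantization lacks.

```python
# Illustrative ternary quantization of a weight tensor to {-alpha, 0, +alpha}.
# Weights below a magnitude threshold are pruned to zero, producing the sparsity
# that a plain multi-level binary ({-alpha, +alpha}) code cannot express.
import numpy as np

def ternarize(w, sparsity=0.5):
    """Map weights to {-alpha, 0, +alpha}; `sparsity` sets the pruned fraction."""
    thresh = np.quantile(np.abs(w), sparsity)              # magnitude-based pruning threshold
    mask = np.abs(w) > thresh                              # surviving (non-zero) weights
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256))                # a dense layer's weights
w_t, mask = ternarize(w, sparsity=0.7)

print("unique values:", np.unique(w_t).size)               # 3: {-alpha, 0, +alpha}
print("word-level sparsity:", 1.0 - mask.mean())           # fraction of zeroed weights
print("mean quantization error:", np.abs(w - w_t).mean())
```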
      • Open Access Article

        4 - Improvement of intrusion detection system on Industrial Internet of Things based on deep learning using metaheuristic algorithms
        Mohammadreza Zeraatkarmoghaddam, Majid Ghayori
        Due to the increasing use of industrial Internet of Things (IIoT) systems, the intrusion detection system (IDS) is one of the most widely used security mechanisms in the IIoT. In these systems, deep learning techniques are increasingly used to detect attacks, anomalies, and intrusions. In deep learning, the most important challenge in training neural networks is determining their hyperparameters. To overcome this challenge, we present a hybrid approach that automates hyperparameter tuning of the deep learning architecture and eliminates the human factor. In this article, an IDS for the IIoT is built from a convolutional neural network (CNN) and a long short-term memory (LSTM) recurrent neural network, tuned with the particle swarm optimization (PSO) and whale optimization algorithm (WOA) metaheuristics. This hybrid of neural networks and metaheuristic algorithms improves neural network performance, increases the detection rate, and reduces training time. In our method, the PSO-WOA algorithm determines the hyperparameters of the neural network automatically, without human intervention. The UNSW-NB15 dataset is used for training and testing. The PSO-WOA algorithm optimizes the hyperparameters of the neural network by limiting the search space, and the CNN-LSTM network is trained with the resulting hyperparameters. The implementation results indicate that, in addition to automating hyperparameter selection, our method achieves a detection rate of 98.5%, a good improvement compared to other methods.
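        The sketch below shows a plain particle swarm search over three assumed CNN-LSTM hyperparameters (learning rate, number of CNN filters, LSTM units); the paper's PSO-WOA hybrid and the actual UNSW-NB15 training loop are not reproduced, so `evaluate` is a placeholder objective to be replaced by one minus the validation detection rate.

```python
# Illustrative plain PSO (not the paper's PSO-WOA hybrid) over three hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

# Assumed search space: log10(learning rate), number of CNN filters, number of LSTM units.
lo = np.array([-4.0, 16.0, 16.0])
hi = np.array([-2.0, 128.0, 128.0])

def evaluate(p):
    """Placeholder objective to minimize; replace with 1 - validation detection rate
    of a CNN-LSTM trained on UNSW-NB15 with these hyperparameters."""
    log_lr, filters, units = p[0], int(p[1]), int(p[2])
    return (log_lr + 3.0) ** 2 + ((filters - 64) / 64) ** 2 + ((units - 96) / 96) ** 2

n_particles, n_iters = 20, 50
w, c1, c2 = 0.7, 1.5, 1.5                                  # inertia and acceleration weights

pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([evaluate(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                       # keep particles inside the search space
    vals = np.array([evaluate(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best hyperparameters: lr=%.1e, filters=%d, lstm_units=%d"
      % (10 ** gbest[0], int(gbest[1]), int(gbest[2])))
```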